Forward–backward algorithm
The forward–backward algorithm is an inference algorithm for hidden Markov models which computes the posterior marginals of all hidden state variables given a sequence of observations/emissions o_{1:t} := o_1,\dots,o_t, i.e. it computes, for all hidden state variables X_k \in \{X_1,\dots,X_t\}, the distribution P(X_k \mid o_{1:t}). This inference task is usually called ''smoothing''. The algorithm makes use of the principle of dynamic programming to efficiently compute, in two passes, the values that are required to obtain the posterior marginal distributions. The first pass goes forward in time while the second goes backward in time; hence the name ''forward–backward algorithm''.
The term ''forward–backward algorithm'' is also used to refer to any algorithm belonging to the general class of algorithms that operate on sequence models in a forward–backward manner. In this sense, the descriptions in the remainder of this article refer only to one specific instance of this class.
==Overview==
In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all k \in \{1,\dots,t\}, the probability of ending up in any particular state given the first k observations in the sequence, i.e. P(X_k \mid o_{1:k}). In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point k, i.e. P(o_{k+1:t} \mid X_k). These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence:
:P(X_k \mid o_{1:t}) = P(X_k \mid o_{1:k}, o_{k+1:t}) \propto P(o_{k+1:t} \mid X_k)\, P(X_k \mid o_{1:k})
The last step follows from an application of Bayes' rule and the conditional independence of o_{k+1:t} and o_{1:k} given X_k.
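The two passes and the combination step above can be sketched in NumPy. This is a minimal illustration, not a reference implementation: the names A (transition matrix), B (emission matrix), pi (initial distribution), and the per-step normalization are conventions assumed here, not fixed by the article.

```python
import numpy as np

def forward_backward(obs, A, B, pi):
    """Posterior marginals P(X_k | o_{1:t}) for a discrete HMM (sketch).

    A  : (S, S) transition matrix, A[i, j] = P(X_{k+1}=j | X_k=i)
    B  : (S, O) emission matrix,  B[i, o] = P(o | X=i)
    pi : (S,)   initial state distribution
    obs: list of observation indices o_1 .. o_t
    """
    t, S = len(obs), len(pi)
    # Forward pass: alpha[k] is proportional to P(X_k, o_{1:k+1});
    # normalizing each step avoids underflow without changing the posteriors.
    alpha = np.zeros((t, S))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for k in range(1, t):
        alpha[k] = (alpha[k - 1] @ A) * B[:, obs[k]]
        alpha[k] /= alpha[k].sum()
    # Backward pass: beta[k] is proportional to P(o_{k+2:t} | X_k).
    beta = np.ones((t, S))
    for k in range(t - 2, -1, -1):
        beta[k] = A @ (B[:, obs[k + 1]] * beta[k + 1])
        beta[k] /= beta[k].sum()
    # Combine per the formula above and renormalize each time step.
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)
```

On the well-known umbrella-world example (states rain/no-rain, observations umbrella/no-umbrella) with A = [[0.7, 0.3], [0.3, 0.7]], B = [[0.9, 0.1], [0.2, 0.8]], pi = [0.5, 0.5] and two umbrella observations, `forward_backward([0, 0], A, B, pi)` gives a smoothed day-1 posterior of approximately [0.883, 0.117].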
As outlined above, the algorithm involves three steps:
# computing forward probabilities
# computing backward probabilities
# computing smoothed values.
The forward and backward steps may also be called "forward message pass" and "backward message pass"; these terms are due to the ''message-passing'' used in general belief propagation approaches. At each single observation in the sequence, the probabilities to be used for calculations at the next observation are computed. The smoothing step can be calculated simultaneously during the backward pass. This step allows the algorithm to take into account any past observations of output when computing more accurate results.
The forward–backward algorithm can be used to find the most likely state for any point in time. It cannot, however, be used to find the most likely sequence of states (see Viterbi algorithm).
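Finding the most likely state sequence is instead the job of the Viterbi algorithm, which replaces the sums of the forward pass with maximizations and keeps backpointers. A minimal sketch, using the same assumed conventions as above (transition matrix A, emission matrix B, initial distribution pi, all strictly positive so that taking logs is safe):

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden state sequence for a discrete HMM (sketch).

    Works in log space for numerical stability; assumes all entries of
    A, B, and pi are strictly positive.
    """
    t, S = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    # delta[k, j] = max over paths of log P(X_{1:k-1}, X_k=j, o_{1:k})
    delta = np.zeros((t, S))
    back = np.zeros((t, S), dtype=int)  # backpointers to the best predecessor
    delta[0] = logpi + logB[:, obs[0]]
    for k in range(1, t):
        scores = delta[k - 1][:, None] + logA  # scores[i, j]: from state i to j
        back[k] = scores.argmax(axis=0)
        delta[k] = scores.max(axis=0) + logB[:, obs[k]]
    # Trace the backpointers from the best final state.
    path = [int(delta[-1].argmax())]
    for k in range(t - 1, 0, -1):
        path.append(int(back[k][path[-1]]))
    return path[::-1]
```

Unlike the smoothed marginals, which answer "what is the most likely state at time k?" independently for each k, this returns a single jointly most likely path; the two can disagree.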

Source: Wikipedia, the free encyclopedia.